One of the most difficult challenges when assembling a telescope is aligning it to optical precision. If you don’t do it correctly, all your images will be fuzzy. This is particularly challenging when you assemble your telescope in space, as the James Webb Space Telescope (JWST) demonstrates.
Unlike the Hubble Space Telescope, the JWST doesn’t have a single primary mirror. To fit in the launch rocket, it had to be folded, then assembled after launch. For this reason and others, JWST’s primary reflector is a set of 18 hexagonal mirror segments. Each segment is only 1.3 meters wide, but when aligned properly, they act effectively as a single 6.5-meter mirror. It’s an effective way to build a larger space telescope, but it means the mirror assembly has to be focused in space.
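As a rough check on those numbers, the sketch below totals the collecting area of the 18 segments, assuming each is a regular hexagon about 1.32 m flat-to-flat (an illustrative figure; the exact segment size varies slightly by source):

```python
import math

# Back-of-envelope check: total collecting area of 18 hexagonal segments.
# Assumes each segment is a regular hexagon ~1.32 m flat-to-flat (the
# article's "1.3 meters wide"); real segments also lose a little area
# to gaps and the secondary-mirror obstruction.
flat_to_flat = 1.32                                  # metres
segment_area = (math.sqrt(3) / 2) * flat_to_flat**2  # area of a regular hexagon
total_area = 18 * segment_area                       # ~27 m^2

# Equivalent diameter of a single filled circular mirror of the same area
equivalent_diameter = 2 * math.sqrt(total_area / math.pi)

print(f"total area ~ {total_area:.1f} m^2")
print(f"equivalent circular aperture ~ {equivalent_diameter:.1f} m")
```

The quoted "6.5 meters" refers to the widest span of the segmented array; once gaps and the secondary obstruction are subtracted, JWST's official collecting area is about 25 m², close to the simple hexagon total above.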
To achieve this, each mirror segment has a set of actuators that can shift the segment along six axes of alignment. They are focused using a wavefront phase technique. Since light behaves as a wave, when two beams of light overlap, the waves create an interference pattern. When the mirrors are aligned properly, the waves of light from each mirror segment also align, creating a sharp focus.
The primary mirrors of Hubble and JWST compared. Credit: Wikipedia user Bobarino

For JWST, its Near Infrared Camera (NIRCam) is equipped with a wavefront camera. To align the mirrors, the JWST team points NIRCam at a star, then intentionally moves the mirrors out of alignment. This gives the star a blurred, diffracted look. The team then positions the mirrors to focus the star, which brings them into alignment.
This was done to align the mirrors soon after JWST was launched. But due to vibrations and shifts in temperature, the mirror segments slowly drift out of alignment. Not by much, but enough that they need to be realigned occasionally. To keep things proper, the team typically does a wavefront error check every other day. There is also a small camera aimed at the mirror assembly, so the team can take a “selfie” to monitor the condition of the mirrors.
The JWST was designed to maintain a wavefront error of 150 nanometers, but the team has been able to maintain a 65 nanometer error. It’s an astonishingly tight alignment for a space telescope, which allows JWST to capture astounding images of the most distant galaxies in the observable universe.
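To put those wavefront errors in context, the standard Maréchal approximation relates an RMS wavefront error σ to the Strehl ratio (peak image sharpness relative to a perfect optic): S ≈ exp(−(2πσ/λ)²). The 2 µm wavelength below is an illustrative near-infrared choice, not a figure from the article:

```python
import math

def strehl(rms_error_nm: float, wavelength_nm: float) -> float:
    """Marechal approximation: Strehl ratio for a given RMS wavefront error."""
    phase = 2 * math.pi * rms_error_nm / wavelength_nm
    return math.exp(-phase**2)

# Compare the 150 nm design requirement with the achieved ~65 nm,
# at an illustrative 2000 nm (2 micron) near-infrared wavelength.
s_design = strehl(150, 2000)   # ~0.80
s_actual = strehl(65, 2000)    # ~0.96
print(f"design: {s_design:.2f}, achieved: {s_actual:.2f}")
```

In other words, at this wavelength the achieved alignment keeps roughly 96% of the theoretically perfect image sharpness, versus about 80% for the design requirement.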
You can learn more about this technique on the NASA Blog.
The post How Webb Stays in Focus appeared first on Universe Today.
Mass purges and prosecutions of scientists have happened before. We shouldn't pretend they can't happen here.
The post Dr. Vinay Prasad: “I Don’t Believe in Forgiveness Because in My Opinion These Pieces of Shit Are Still Lying.” first appeared on Science-Based Medicine.

Astronauts on the International Space Station generate their share of garbage, filling up cargo ships that then deorbit and burn up in the atmosphere. Now Sierra Space has won a contract to build a trash compactor for the space station. The device will compact space trash by 75% in volume and allow water and other gases to be extracted for reclamation. The resulting garbage blocks are easily stored and could even be used as radiation shielding on long missions.
Called the Trash Compaction and Processing System (TCPS), the device is scheduled for testing aboard the International Space Station in late 2026.
Sierra Space said this technology could be critical for the success of future space exploration — such as long-duration crewed missions to the Moon and Mars — to handle waste management, stowage, and water reclamation.
“Long-term space travel requires the efficient use of every ounce of material and every piece of equipment. Every decision made on a spacecraft can have far-reaching consequences, and waste management becomes a matter of survival and mission integrity in the vacuum of space,” said Sierra Space CEO, Tom Vice, in a press release. “We’re addressing this challenge through technological innovation and commitment to sustainability in every facet of space operations. Efficient, sustainable, and innovative waste disposal is essential for the success of crewed space exploration.”
A sample trash tile, compressed to less than one-eighth of the original trash volume, was produced by the Heat Melt Compactor. Credit: NASA

NASA said that currently aboard the International Space Station (ISS), common trash such as food packaging, clothing, and wipes is separated into wet and dry trash bags; these bags are stored temporarily before being packed into a spent resupply vehicle, such as the Russian Progress ship or Northrop Grumman’s Cygnus vehicle. When full, these ships undock and burn up during atmospheric re-entry, taking all the trash with them.
However, for missions further out into space, trash will have to be managed and disposed of by other methods, such as jettisoning the trash into space – which doesn’t sound like a very eco-friendly idea. Additionally, wet trash contains components that may not be storable for long periods between jettisoning events without endangering the crew.
Plus, there’s currently no way for any water to be reclaimed from the “wet” waste. The TCPS should be able to recover nearly all the water from the trash for future use.
TCPS is a stand-alone system that requires only access to power, data, and air-cooling interfaces, and it is being designed to be simple to use.
Sierra Space said the device includes an innovative Catalytic Oxidizer (CatOx) “that processes volatile organic compounds (VOCs) and other gaseous byproducts to maintain a safe and sterile environment in space habitats.” Heat and pressure compacts astronaut trash into solid square tiles that compress to less than one-eighth of the original trash volume. The tiles are easy to store, safe to handle, and have the added — and potentially very important — benefit of providing additional radiation protection.
Sierra Space was originally awarded a contract in 2023, and in January 2024 the company completed the initial design phase, which was presented to NASA for review. Sierra Space is now finalizing the fabrication, integration, and checkout of the TCPS Ground Unit, which will be used for ground testing in ongoing system evaluations. Based on the success of that design, Sierra Space has been awarded a new contract to build a Flight Unit that will be launched and tested in orbit aboard the space station.
NASA said that once tested on the ISS, the TCPS can be used for exploration missions wherever common spacecraft trash is generated and needs to be managed.
The post A Trash Compactor is Going to the Space Station appeared first on Universe Today.
The most amazing thing about light is that it takes time to travel through space. Because of that one simple fact, when we look up at the Universe we see not a snapshot but a history. The photons we capture with our telescopes tell us about their journey. This is particularly true when gravity comes into play, since gravity bends and distorts the path of light. In a recent study, a team shows us how we might use this fact to better study black holes.
Near a black hole, our intuition about the behavior of light breaks down. For example, if we imagine a flash of light in empty space, we understand that the light from that flash expands outward in all directions, like the ripples on a pond. If we observe that flash from far away, we know the light has traveled in a straight line to reach us. This is not true near a black hole.
The gravity of a black hole is so intense that light never travels in a straight line. If there is a flash near a black hole, some of the light will travel directly to us, but some of the light will travel away from us, only to be gravitationally swept around the backside of the black hole to head in our direction. Some light will make a full loop around the black hole before reaching us. Or two loops, or three. With each path, the light travels a different distance to reach us, and therefore reaches us at a different time. Rather than observing a single flash, we would see echoes of the flash for each journey.
In principle, since each echo is from a different path, the timing of these echoes would allow us to map the region around a black hole more clearly. The echoes would tell us not just the black hole’s mass and rotation; they would also allow us to test the limits of general relativity. The only problem is that with current observations, the echoes wash together in the data. We can’t distinguish different echoes.
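As a back-of-envelope illustration (not a calculation from the study itself), the delay between successive echoes is set roughly by the light-travel time around the photon sphere at r = 3GM/c² for a non-rotating black hole; the M87-like mass below is an assumption:

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

# Order-of-magnitude delay between successive light echoes: roughly the
# time for one extra loop around the photon sphere at r = 3GM/c^2
# (Schwarzschild case). The M87-like mass is an illustrative assumption.
M = 6.5e9 * M_SUN
r_photon = 3 * G * M / c**2
delay_per_loop = 2 * math.pi * r_photon / c   # seconds

print(f"photon-sphere radius ~ {r_photon:.2e} m")
print(f"delay per extra loop ~ {delay_per_loop / 86400:.1f} days")
```

For a black hole of several billion solar masses, each extra loop adds roughly a week of delay, which hints at why the echoes overlap and smear together in present-day light curves; a spinning (Kerr) black hole changes the exact numbers but not the order of magnitude.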
This is where the new study comes in. The team proposes observing a black hole with two telescopes, one on Earth and one in space. Each telescope would have a slightly different view of the black hole. Through very long baseline interferometry, the two sets of data could be correlated to distinguish the echoes. In their work, the team ran tens of thousands of simulations of light echoes from a supermassive black hole similar to the one in the M87 galaxy. They demonstrated that interferometry could be used to find correlated light echoes.
It would be a challenge to build such an interferometer, but it would be well within our engineering capabilities. Perhaps in the future, we will be able to observe echoes of light to explore black holes and some of the deepest mysteries of gravity.
Reference: Wong, George N., et al. “Measuring Black Hole Light Echoes with Very Long Baseline Interferometry.” The Astrophysical Journal Letters 975.2 (2024): L40.
The post Using Light Echoes to Find Black Holes appeared first on Universe Today.
Placing a mass driver on the Moon has long been a dream of space exploration enthusiasts. It would open up so many possibilities for the exploration of our solar system and the possibility of actually living in space. Gerard O’Neill, in his work on the gigantic cylinders that now bear his name, mentioned using a lunar mass driver as the source of the material to build them. So far, we have yet to see such an engineering wonder in the real world, but as more research is done on the topic, more and more feasible paths seem to be opening up to its potential implementation.
One recent contribution to that effort is a study by Pekka Janhunen of the Finnish Meteorological Institute and Aurora Propulsion Technologies, a maker of space-based propulsion systems. He details how we can use quirks of lunar gravity to use a mass driver to send passive loads to lunar orbit, where they can then be picked up with active, high-efficiency systems and sent elsewhere in the solar system for processing.
Anomalies in the Moon’s gravitational field have been known for some time. Typically, mission planners view them as a nuisance to be avoided, as they can cause satellite orbits to degrade more quickly than expected by nice, simple models. However, according to Dr. Janhunen, they could also be a help rather than a hindrance.
Mass drivers have been popular in science fiction for some time.

Typical models of using lunar mass drivers focus on active or passive payloads sent into lunar orbit. Active payloads require some onboard propulsion system to get them to where they are going. Therefore, these payloads require more active technology and some form of propellant, which diminishes the total amount available for use elsewhere in the solar system.
On the other hand, passive payloads will typically end up in one of two scenarios. Either they make one lunar orbit in about one day and then deorbit back to the lunar surface, or they end up in a highly randomized orbit and essentially end up as lunar space junk. Neither of those solutions would be sustainable for significant mass movement off the lunar surface.
Dr. Janhunen may have found a solution, though. He studied the known lunar gravitational anomalies mapped by GRAIL. That mission charted the Moon’s gravity in great detail and found several places on the lunar surface where a mass driver could potentially launch a passive payload into an orbit that would last up to nine days. These places are along the sides of mountains, and three of them are on the side of the Moon facing Earth. Importantly, all of them owe this property to their local gravitational quirks.
The Artemis missions might be our best chance in the coming decades to build a mass driver on the Moon – Fraser discusses their details here.

More time in orbit would mean more time for an active tug to grab hold of the passive lunar payload and take it to a processing station, such as a space station at the Earth–Moon L5 point. This active tug could be reusable, have a highly efficient electrical propulsion system developed and built on Earth, and only need to be launched once.
All that would be required for the system to work would be a mass driver that could accelerate a payload up to a lunar orbital velocity of about 1.7 km/s. That is well within our capabilities to build with existing technologies, but it would require a massive engineering effort far beyond anything we have built in space so far. However, every study that shows a potential increased benefit or lowered cost to eventually exploiting the resources of our nearest neighbor to expand our reach into the solar system takes us one step closer to making that a reality.
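The quoted 1.7 km/s can be checked with the circular-orbit formula v = √(GM/r); the sketch below uses standard values for the Moon's mass and radius:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.342e22   # lunar mass, kg
R_MOON = 1.737e6    # mean lunar radius, m

# Circular orbital speed just above the lunar surface: v = sqrt(GM/r)
v_orbit = math.sqrt(G * M_MOON / R_MOON)   # m/s, ~1.68 km/s

# Kinetic energy the mass driver must impart per kilogram of payload
energy_per_kg = 0.5 * v_orbit**2           # J/kg, ~1.4 MJ/kg

print(f"orbital speed ~ {v_orbit / 1000:.2f} km/s")
print(f"launch energy ~ {energy_per_kg / 1e6:.2f} MJ per kg")
```

At roughly 1.4 megajoules per kilogram (about 0.4 kWh), the energy cost of electromagnetic launch from the Moon is modest; the engineering challenge lies in the accelerator structure and power delivery, not the energy budget.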
Learn More:
P Janhunen – Launching mass from the Moon helped by lunar gravity anomalies
UT – Moonbase by 2022 For $10 Billion, Says NASA
UT – NASA Wants to Move Heavy Cargo on the Moon
NSS – L5 News: Mass Driver Update
Lead Image:
DALL-E illustration of a lunar electromagnetic launcher
The post Launching Mass From the Moon Helped by Lunar Gravity Anomalies appeared first on Universe Today.
Stars at least about eight times as massive as the Sun explode as supernovae at the end of their lives. The explosions, which leave behind a black hole or a neutron star, are so energetic they can outshine their host galaxies for months. However, astronomers appear to have spotted a massive star that skipped the explosion and turned directly into a black hole.
Stars are balancing acts between the outward force of fusion and the inward force of their own gravity. When a massive star enters its last evolutionary stages, it begins to run out of hydrogen, and its fusion weakens. The outward force from its fusion can no longer counteract the star’s powerful gravity, and the star collapses in on itself. The result is a supernova explosion, a calamitous event that destroys the star and leaves behind a black hole or a neutron star.
However, it appears that sometimes these stars fail to explode as supernovae and instead turn directly into black holes.
New research shows how one massive, hydrogen-depleted supergiant star in the Andromeda galaxy (M31) failed to detonate as a supernova. The research is “The disappearance of a massive star marking the birth of a black hole in M31.” The lead author is Kishalay De, a postdoctoral scholar at the Kavli Institute for Astrophysics and Space Research at MIT.
These types of supernovae are called core-collapse supernovae, also known as Type II. They’re relatively rare, with one occurring about every one hundred years in the Milky Way. Scientists are interested in supernovae because they are responsible for creating many of the heavy elements, and their shock waves can trigger star formation. They also create cosmic rays that can reach Earth.
This new research shows that we may not understand supernovae as well as we thought.
Artist’s impression of a Type II supernova explosion. These supernovae occur when a massive star nears the end of its life, leaving behind either a black hole or a neutron star. But sometimes, the supernova fails to explode and the star collapses directly into a black hole. Image Credit: ESO

The star in question is named M31-2014-DS1. Astronomers noticed it brightening in the mid-infrared (MIR) in 2014. For one thousand days, its luminosity was constant. Then, for another thousand days between 2016 and 2019, it faded dramatically. It’s a variable star, but variability alone can’t explain these fluctuations. In 2023, it was undetected in deep optical and near-IR (NIR) imaging observations.
The researchers say that the star was born with an initial mass of about 20 solar masses and reached its terminal nuclear-burning phase with about 6.7 solar masses. Their observations suggest that the star is surrounded by a recently ejected dust shell, as would be expected from a supernova explosion, but there’s no evidence of an optical outburst.
“The dramatic and sustained fading of M31-2014-DS1 is exceptional in the landscape of variability in massive, evolved stars,” the authors write. “The sudden decline of luminosity in M31-2014-DS1 points to the cessation of nuclear burning together with a subsequent shock that fails to overcome the infalling material.” A supernova explosion is so powerful that it completely overcomes infalling material.
“Lacking any evidence for a luminous outburst at such proximity, the observations of M31-2014-DS1 bespeak signatures of a ‘failed’ SN that leads to the collapse of the stellar core,” the authors explain.
What could make a star fail to explode as a supernova, even if it’s the right mass to explode?
Supernovae are complex events. The density inside a collapsing core is so extreme that electrons are forced to combine with protons, creating both neutrons and neutrinos. This process is called neutronization, and it creates a powerful burst of neutrinos that carries about 10% of the star’s rest mass energy. The outburst is called a neutrino shock.
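To get a feel for the scale, here is the arithmetic behind that 10% figure, applied to the collapsing core rather than the whole star; the typical 1.4-solar-mass core is an assumption, not a number from the article:

```python
# Rough scale of the neutrino burst: ~10% of the rest-mass energy of the
# collapsing core. A typical ~1.4 solar-mass core is an assumption here.
c = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

core_mass = 1.4 * M_SUN
rest_mass_energy = core_mass * c**2        # E = mc^2
neutrino_energy = 0.1 * rest_mass_energy   # ~10% carried away by neutrinos

print(f"neutrino burst ~ {neutrino_energy:.1e} J")
```

That works out to a few times 10^46 joules (about 10^53 erg), released over mere seconds, which is why even weakly interacting neutrinos can deposit enough energy in the dense infalling material to matter.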
Neutrinos get their name from the fact that they’re electrically neutral and seldom interact with regular matter. Every second, about 400 billion neutrinos from our Sun pass right through every person on Earth. But in a dense stellar core, the neutrino density is so extreme that some of them deposit their energy into the surrounding stellar material. This heats the material, which generates a shock wave.
The neutrino shock always stalls, but sometimes it revives. When it revives, it drives an explosion and expels the star’s outer layers. If it’s not revived, the shock wave fails, and the star collapses and forms a black hole.
This image illustrates how the neutrino shock wave can stall, leading to a black hole without a supernova explosion. A shows the initial shock wave, with cyan lines representing neutrinos being emitted and the red circle representing the shock wave propagating outward. B shows the neutrino shock stalling, with white arrows representing infalling matter. The outer layers fall inward, and the neutrino heating isn’t powerful enough to revive the shock. C shows the failed shock dissipating as a dotted red line, while the stronger white arrows represent the accelerating collapse. The outer layers are falling in rapidly, and the core is becoming more compact. D shows the black hole forming, with the blue circle representing the event horizon and the remaining material forming an accretion disk. (Credit: Original illustration created for this article.)

In M31-2014-DS1, the neutrino shock was not revived. The researchers were able to constrain the amount of material ejected by the star, and it was far below what a supernova would eject. “These constraints imply that the majority of stellar material (≳5 solar masses) collapsed into the core, exceeding the maximum mass of a neutron star (NS) and forming a BH,” they conclude. About 98% of the star’s mass collapsed and created a black hole of about 6.5 solar masses.
M31-2014-DS1 isn’t the only failed supernova, or candidate failed supernova, that astronomers have found. They’re difficult to spot because they’re characterized by what doesn’t happen rather than what does. A supernova is hard to miss because it’s so bright and appears in the sky suddenly. Ancient astronomers recorded several of them.
In 2009, astronomers discovered the only other confirmed failed supernova: a red supergiant star in NGC 6946, the “Fireworks Galaxy,” named N6946-BH1, with about 25 solar masses. That year, its luminosity increased to a million solar luminosities, but by 2015 it had disappeared in optical light, leaving only a faint infrared glow.
A survey with the Large Binocular Telescope monitored 27 nearby galaxies, looking for disappearing massive stars. The results suggest that between 20% and 30% of massive stars can end their lives as failed supernovae. However, M31-2014-DS1 and N6946-BH1 are the only confirmed observations.
The post A Star Disappeared in Andromeda, Replaced by a Black Hole appeared first on Universe Today.
About half a century ago, astronomers theorized that the Solar System is situated in a low-density hot gas environment. This hot gas emits soft X-rays that displace the dust in the local interstellar medium (ISM), creating what is known as the Local Hot Bubble (LHB). This theory arose to explain the ubiquitous soft X-ray background (below 0.2 keV) and the lack of dust in our cosmic neighborhood. This theory has faced some challenges over the years, including the discovery that solar wind and neutral atoms interact with the heliosphere, leading to similar emissions of soft X-rays.
Thanks to new research by an international team of scientists led by the Max Planck Institute for Extraterrestrial Physics (MPE), we now have a 3D model of the hot gas in the Solar System’s neighborhood. Using data obtained by the eROSITA All-Sky Survey (eRASS1), they detected large-scale temperature differences in the LHB, indicating that the LHB must exist and that both it and solar wind interaction contribute to the soft X-ray background. They also revealed an interstellar tunnel that could possibly link the LHB to a larger “superbubble.”
The research was led by Michael C. H. Yeung, a PhD student at the MPE who specializes in the study of high-energy astrophysics. He was joined by colleagues from the MPE, the INAF-Osservatorio Astronomico di Brera, the University of Science and Technology of China, and the Dr. Karl Remeis Observatory. The paper that details their findings, “The SRG/eROSITA diffuse soft X-ray background,” was published on October 29th, 2024, by the journal Astronomy & Astrophysics.
This image shows half of the X-ray sky projected onto a circle with the center of the Milky Way on the left and the galactic plane running horizontally. Credit ©: MPE/J. Sanders/eROSITA consortium

The eROSITA telescope was launched in 2019 as part of the Russian–German Spektr-RG space observatory. It is the first X-ray observatory to observe the Universe from beyond Earth’s geocorona, the outermost region of the Earth’s atmosphere (aka the exosphere), thereby avoiding contamination by the latter’s high-ultraviolet light. In addition, the eROSITA All-Sky Survey (eRASS1) was timed to coincide with the solar minimum, thus reducing contamination by solar wind charge exchange.
For their study, the team combined data from eRASS1 with data from eROSITA’s predecessor, the X-ray telescope ROSAT (short for Röntgensatellit). Also built by the MPE, this telescope complements the eROSITA spectra by detecting X-rays with energies lower than 0.2 keV. The team focused on the western Galactic hemisphere, dividing it into about 2000 regions and analyzing the spectra from each. Their analysis showed a clear temperature difference between the parts of the LHB oriented towards Galactic South (0.12 keV; 1.4 MK) and Galactic North (0.10 keV; 1.2 MK).
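The paired keV and megakelvin values quoted above are the same measurements in different units, related by T = E/k_B, as this small conversion check shows:

```python
# The quoted plasma temperatures in keV and megakelvin (MK) are the same
# numbers in different units, related by T = E / k_B.
K_B_EV = 8.617e-5   # Boltzmann constant in eV per kelvin

def kev_to_megakelvin(energy_kev: float) -> float:
    """Convert a thermal energy in keV to a temperature in megakelvin."""
    return energy_kev * 1e3 / K_B_EV / 1e6

south = kev_to_megakelvin(0.12)   # ~1.4 MK
north = kev_to_megakelvin(0.10)   # ~1.2 MK
print(f"Galactic South: {south:.1f} MK, Galactic North: {north:.1f} MK")
```

So the roughly 0.02 keV difference between the hemispheres corresponds to a temperature gap of about 200,000 kelvin in the million-degree plasma.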
According to the authors, this difference could have been caused by supernova explosions that expanded and reheated the Galactic South portion of the LHB in the past few million years. Yeung explained in an MPE press release: “In other words, the eRASS1 data released to the public this year provides the cleanest view of the X-ray sky to date, making it the perfect instrument for studying the LHB.”
In addition to yielding temperature data, the combined diffuse X-ray background spectra also provided the 3D structure of the hot gas. In a previous study, Yeung and his colleagues examined eRASS1 spectra from almost all directions in the western Galactic hemisphere. They concluded that the density of the hot gas in the LHB is relatively uniform. Relying on this previous work, the team generated a new 3D model of the LHB from the measured intensity of X-ray emissions.
A 3D interactive view of the LHB and the solar neighborhood. Credit: MPE

This model shows that the LHB extends farther toward the Galactic poles than expected, since the hot gas tends to follow the path of least resistance (away from the Galactic disc). Michael Freyberg, a core author of this work, was part of the pioneering work in the ROSAT era three decades ago. As he explained:
“This is not surprising, as was already found by the ROSAT survey. What we didn’t know was the existence of an interstellar tunnel towards Centaurus, which carves a gap in the cooler interstellar medium (ISM). This region stands out in stark relief thanks to the much-improved sensitivity of eROSITA and a vastly different surveying strategy compared to ROSAT.”
These latest results suggest the Centaurus tunnel may be a local example of a wider hot ISM network sustained by supernovae and solar wind-ISM interaction across the Galaxy. While astronomers have theorized the existence of the Centaurus tunnel since the 1970s, it has remained difficult to prove until now. The team also compiled a list of known supernova remnants, superbubbles, and dust and used these to create a 3D model of the Solar System’s surroundings. The new model allows astronomers to better understand the key features in the representation.
These include the Canis Major tunnel, which may connect the LHB to the Gum Nebula (the red globe) or the long grey superbubble (GSH238+00+09). Dense molecular clouds, represented in orange, are shown near the surface of the LHB in the direction of the Galactic Center (GC). Recent work suggests these clouds are moving away from the Solar System and likely formed from the condensation of materials swept up during the early formation of the LHB. Said Gabriele Ponti, a co-author of this work:
“Another interesting fact is that the Sun must have entered the LHB a few million years ago, a short time compared to the age of the Sun. It is purely coincidental that the Sun seems to occupy a relatively central position in the LHB as we continuously move through the Milky Way.”
Further Reading: MPE, Astronomy & Astrophysics
The post eROSITA All-Sky Survey Takes the Local Hot Bubble’s Temperature appeared first on Universe Today.
In his new book Abortion in the Age of Unreason: A Doctor’s Account of Caring for Women Before and After Roe v. Wade, a nationally prominent doctor reports the daily challenges of offering and receiving abortion services in a volatile political and social atmosphere. In stories from the front lines, from protecting patients and staff against protesters’ attacks to the dangers women face under restricted access to abortion services, along with the pertinent findings of his remote research in Latin America, Hern’s book is strikingly detailed even as it exposes the needs of women and the U.S. national interest. Dr. Hern, an abortion specialist, researcher, scholar, and highly visible public advocate, shows how abortion saves women’s lives given the many risks that arise during pregnancy, more than most people realize. He points to political and national solutions to reverse a reawakened crisis that now threatens democracy. Throughout the book, Dr. Hern shows how the current emergency was largely created by political actors who have exploited and distorted the abortion issue to increase and consolidate their power.
Abortion is a vital component of women’s health care, and the crisis over it is not new. Yet the reversal of Roe v. Wade and the steady accumulation of power by America’s right wing have put the issue at a level of urgency and national prominence not seen since the days before legalization. Women’s need for safe abortion services will continue as the struggle to secure their rights intensifies.
Warren M. Hern, M.D., is known to the public through his many appearances on CNN, Rachel Maddow/MSNBC, Sixty Minutes, and in the pages of The Atlantic magazine, The New York Times, Washington Post, and dozens more media. A scientist, Hern wrote about the need for safe abortion services before the 1973 Roe v. Wade decision and was present at the first Supreme Court arguments. In his research and medical work, he has pioneered the modern safe practice of early and late abortion since 1973, documented in his highly influential books and scholarship. A tireless national activist for women’s reproductive rights, he is an adjunct professor of anthropology at the University of Colorado, Boulder, and holds a clinical appointment in obstetrics and gynecology at the University of Colorado medical center. He holds doctorates in medicine and epidemiology. His book is Abortion in the Age of Unreason: A Doctor’s Account of Caring for Women Before and After Roe v. Wade.
Shermer and Hern discuss:
Why women get abortions—2013 study “Understanding Why Women Seek Abortions in the US” (BMC Women’s Health Antonia Biggs, Heather Gould, Diana Greene Foster):
The top three reason categories cited in both studies were: 1) “Having a baby would dramatically change my life” (i.e., interfere with education, employment and ability to take care of existing children and other dependents) (74% in 2004 and 78% in 1987), 2) “I can’t afford a baby now” (e.g., unmarried, student, can’t afford childcare or basic needs) (73% in 2004 and 69% in 1987), and 3) “I don’t want to be a single mother or am having relationship problems” (48% in 2004 and 52% in 1987). A sizeable proportion of women in 2004 and 1987 also reported having completed their childbearing (38% and 28%), not being ready for a/another child (32% and 36%), and not wanting people to know they had sex or became pregnant (25% and 33%).
What about medical problems with the woman or the fetus? Lozier Institute 2024 study:
Worldwide (Guttmacher Institute):
Americans’ Self-ID on Abortion, 2024 (Gallup):
If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.
To end the week, we have some stunning videos from BBC Earth and other places depicting the migratory behavior of the red crabs of Christmas Island, a small Australian territory (135 km², pop. 1,692) near Indonesia, encircled on the map below. Their vernacular name is the Christmas Island red crab, their Latin binomial is Gecarcoidea natalis, and they are endemic to that island and the Cocos (Keeling) Islands in the Indian Ocean.
TUBS, CC BY-SA 3.0, via Wikimedia Commons

The life cycle of this crustacean is described here, but the videos are more impressive. Here are the basic facts:
The migration starts with the first rainfall of the wet season. This is usually in October or November, but can sometimes be as late as December or January.
Red crabs all over the island leave their homes at the same time and start marching towards the ocean to mate and spawn. Male crabs lead the migration and are joined by females along the way.
The exact timing and speed of the migration is determined by the phase of the moon. Red crabs always spawn before dawn on a receding high-tide during the last quarter of the moon. Incredibly, they know exactly when to leave their burrows to make this lunar date.
However, because crabs wait until the first rainfall to start their trek, they sometimes have to hurry. If the rains arrive close to the optimal spawning date, they will move rapidly. But if the rain comes early, they may take their time, stopping to eat and drink on their way to the coast.
If it begins raining too late to make the spawning date, some crabs will stay in their burrows and migrate the following month instead.
And from BBC Earth, narrated by Attenborough. Note that the crabs breathe through gills, which must be kept moist. They live on land, albeit in moist habitats, and a remnant of their evolutionary origin is their need to return to the ocean to spawn. Note that they walk sideways.
Note that they mate during the migration, too, so it’s not just females who are drawn to the sea at this time. A similar video, but also showing one of their predators and some of the other dangers they face.
I especially like this video because it shows how the island’s human inhabitants care for and protect the crabs:
After Trump won the election, Laura Helmuth, editor-in-chief of Scientific American, went ballistic on Bluesky (Twitter for progressives). She issued the three posts below, decrying her generation for being “fucking fascists” and telling some of her high-school classmates to “fuck them to the moon and back” (note to editor: “moon” is usually capitalized).
I have to say that this sounds as if Helmuth was a bit tipsy, but I won’t blame alcohol for this. After all, if you’re drunk, you’d better stay away from social media! I wrote about these “tweets”, and about Scientific American’s “progressive” editorial slant, in a piece I posted yesterday. (This is part of a long series of posts I’ve done about her and the magazine.)
At any rate, after what must have been a bunch of pushback, and perhaps realizing that her job was in jeopardy, Helmuth issued an abject response yesterday, to wit:
After making a series of fiercely ideological and political statements on social media in the wake of Trump’s win and being pilloried for doing so, @SciAm editor in chief Laura Helmuth is now back pedaling. She claims she is committed to editorial objectivity. pic.twitter.com/qwTsaiyKLE
— Benjamin Ryan (@benryanwriter) November 7, 2024
Well, I’m trying to be more charitable these days, striving to put myself in my opponents’ shoes and imputing to them the best motives I can think of, but I couldn’t do it this time. And that’s because Helmuth has left a paper trail during her editorship—a paper trail of progressive leftism and wokeness that has demonized many people (including Mendel!) as racists and bigots. Thus I’m not convinced by her assertion that she “respects and values people across the political spectrum.” No, she seems to despise people on the right, and that’s what came out in her first set of tweets above.
Further, what is the “mistake” here? She is the editor of a major magazine, for crying out loud, and should know how to control herself. “Shock and confusion” doesn’t, at least to me, excuse her behavior. “Shock,” perhaps, but what is she “confused” about?
Her statement that her unhinged tweets “do not reflect the position of Scientific American or my colleagues” really means, of course, “Please don’t fire me! I’ll be a good girl from now on.” I doubt, however, that her bosses at Springer really care about her eroding reputation. They probably care more about the bottom line, and I have no idea how the magazine is doing.
The sentence that irked me the most is “I am committed to civil communication and editorial objectivity.” Indeed! The whole magazine has violated both tenets for years. It gave Michael Shermer a pink slip for simply questioning accepted (woke) wisdom in his column, and couldn’t wait to accuse E. O. Wilson of racism, nearly before his body had gone cold. The many biased and slanted columns do not bespeak Helmuth’s commitment to objectivity, and here’s one example that I mentioned yesterday.
After the magazine published its hit piece on E. O. Wilson, accusing him (as well as Mendel and others) of racism, thirty evolutionary biologists and I cobbled together a letter to Scientific American, rebutting the hit piece’s claims and defending Wilson and his legacy (you can see the letter here). Helmuth rejected the letter. She also rejected my personal appeal to “consider an op-ed about how extreme Leftist progressivism is besmirching science itself by distorting the truth? (Example: arguments that sex is not bimodal in humans, but forms a continuum.) I could make a number of arguments like that about biology that, contra McLemore, have truth behind them.” That letter didn’t fly, but Luana Maroja and I turned the idea into a paper for Skeptical Inquirer.
So much for Helmuth’s editorial objectivity!
Unfortunately, the readers are almost unanimously unimpressed by the apology. Go see for yourself, but I’ll put up a few screenshots of responses: